91.
To minimize the mass and increase the bearing failure load of composite double-lap bolted joints, a three-step optimization strategy comprising feasible region reduction, optimization model decoupling and optimization was presented. In feasible region reduction, the dimensionality of the feasible design region was reduced by selecting dominant design variables from numerous multilevel parameters through sensitivity analyses, and the feasible ranges of the variables were narrowed by influence mechanism analyses. In model decoupling, the optimization model with a large number of variables was divided into several sub-models with fewer variables by variance analysis. In the third step, the optimization sub-models were solved one by one using a genetic algorithm, with the modified characteristic curve method adopted for failure prediction. Based on the proposed optimization method, optimization of a double-lap single-bolt joint was performed using the ANSYS® code. The results show that the bearing failure load increased by 13.5% and the mass decreased by 8.7% compared with the initial design of the joint, which validates the effectiveness of the three-step optimization strategy.
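The genetic-algorithm step above can be illustrated with a minimal real-coded GA. This is a generic sketch, not the authors' ANSYS-coupled model: the objective below is a hypothetical stand-in for the joint design objective, and the operators (elitist selection, blend crossover, Gaussian mutation) are common textbook choices rather than the paper's exact configuration.

```python
import random

def genetic_minimize(objective, bounds, pop_size=40, generations=60,
                     mutation_rate=0.2, seed=0):
    """Minimal real-coded genetic algorithm: keep the better half of the
    population, create children by blend crossover, and apply Gaussian
    mutation clipped to the box bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=objective)[: pop_size // 2]  # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)                       # two parents
            child = [(x + y) / 2 for x, y in zip(a, b)]       # blend crossover
            for i, (lo, hi) in enumerate(bounds):             # Gaussian mutation
                if rng.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = elite + children
    best = min(pop, key=objective)
    return best, objective(best)

# Hypothetical stand-in for a joint design objective, optimum at (1.0, 2.0).
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
best, val = genetic_minimize(f, [(0.0, 3.0), (0.0, 3.0)])
```

In the paper's setting, `objective` would be a penalized mass/failure-load measure evaluated per sub-model by the finite element code, with one such run per decoupled sub-model.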
92.
Based on the multiphase field concept and integrated with the idea of a vector-valued phase field, a phase field model for the typical allotropic transformation of a solid solution is proposed. The model properly accounts for the non-uniform distribution of parent-phase grain boundaries and for crystal orientation, as illustrated by a simulation of the austenite-to-ferrite transformation in low carbon steel. It is found that the misorientation-dependent grain boundary mobility strongly influences the resulting ferrite morphology, whereas the misorientation-dependent grain boundary energy exerts only a weak effect. The evolution of the various types of grain boundaries is quantitatively characterized in terms of their respective grain boundary energy dissipation. The simulated ferrite fraction agrees well with the expectation from the phase diagram, which verifies the model.
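The core mechanics of any phase field model can be shown with a scalar 1-D Allen-Cahn equation. This is a deliberately simplified sketch, not the paper's multiphase, vector-valued model: one order parameter phi distinguishes parent (0) from product (1) phase, with a double-well potential and explicit time stepping; all parameter values are illustrative.

```python
def allen_cahn_step(phi, mobility=1.0, eps2=0.01, dx=0.1, dt=0.001):
    """One explicit Euler step of the 1-D Allen-Cahn equation
        d(phi)/dt = M * (eps^2 * laplacian(phi) - W'(phi)),
    with double-well W(phi) = phi^2 (1 - phi)^2, so
        W'(phi) = 2 phi (1 - phi)(1 - 2 phi).
    Periodic boundary conditions; stable for dt * M * eps2 / dx^2 << 1."""
    n = len(phi)
    new = phi[:]
    for i in range(n):
        lap = (phi[(i - 1) % n] - 2 * phi[i] + phi[(i + 1) % n]) / dx ** 2
        dw = 2 * phi[i] * (1 - phi[i]) * (1 - 2 * phi[i])
        new[i] = phi[i] + dt * mobility * (eps2 * lap - dw)
    return new

# A product-phase band (phi = 1) embedded in the parent phase (phi = 0);
# the interface relaxes into a diffuse profile over time.
phi = [1.0 if 20 <= i < 44 else 0.0 for i in range(64)]
for _ in range(200):
    phi = allen_cahn_step(phi)
```

In the paper's model the single mobility constant is replaced by a misorientation-dependent grain boundary mobility, which is exactly the term found to dominate the ferrite morphology.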
93.
Mobile cloud computing is an emerging field that is rapidly gaining popularity across borders. Health informatics is likewise regarded as an extremely important field. This work brings the two fields together to solve the traditional problem of extracting Electrocardiogram (ECG) signals from trace reports and then analyzing them. The developed system has two front ends. The first is dedicated to the user, who photographs the trace report; once photographing is complete, mobile computing is used to extract the signal, which is then uploaded to the server, where further analysis is performed in the cloud. Through the second interface, intended for the physician, the trace can be downloaded and viewed from the cloud. The data is held securely using password-based authentication. The system presented here is one of the first attempts to deliver a complete solution, and after further upgrades it should be possible to deploy it in a commercial setting.
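The abstract mentions password-based authentication but does not specify the scheme; a standard way to implement it server-side is salted PBKDF2 with a constant-time comparison. The sketch below is one reasonable realization, not the paper's actual implementation.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 100_000):
    """Derive a storable (salt, digest) pair with PBKDF2-HMAC-SHA256.
    A fresh random salt is generated when none is supplied."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes,
                    iterations: int = 100_000) -> bool:
    """Re-derive the digest and compare in constant time to avoid
    timing side channels."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)

# At registration, the cloud service stores (salt, digest) for the physician.
salt, stored = hash_password("physician-secret")
```

On each download request, the physician's front end sends the password over TLS and the server calls `verify_password` before releasing the trace.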
94.
With the popularity of sensor-rich mobile devices, mobile crowdsensing (MCS) has emerged as an effective method for data collection and processing. However, an MCS platform usually needs workers' precise locations for optimal task execution and collects sensing data from the workers, which raises severe concerns about privacy leakage. To protect workers' locations and sensing data from an untrusted MCS platform, a differentially private data aggregation method based on worker partition and location obfuscation (the DP-DAWL method) is proposed in this paper. DP-DAWL first uses an improved K-means algorithm to divide workers into groups and assigns a different privacy budget to each group according to its size (the number of workers). Each worker's location is then obfuscated, and his/her sensing data is perturbed by adding Laplace noise, before being uploaded to the platform. In the data aggregation stage, DP-DAWL adopts an improved Kalman filter algorithm to filter out the added noise (both the noise added to the sensing data and the system noise arising in the sensing process). By optimally estimating the noisy aggregated sensing data, the platform can obtain better utility of the aggregated data while preserving workers' privacy. Extensive experiments on synthetic datasets demonstrate the effectiveness of the proposed method.
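The Laplace perturbation step can be sketched directly: each worker adds Laplace noise with scale sensitivity/epsilon before upload, and the aggregate mean concentrates around the truth as the group grows. This shows only the standard Laplace mechanism, not the paper's worker partition, budget assignment, or Kalman filtering; the reading values and parameters are illustrative.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via the inverse-CDF method:
    x = -scale * sign(u) * ln(1 - 2|u|), with u uniform on (-1/2, 1/2)."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatize(value: float, sensitivity: float, epsilon: float,
              rng: random.Random) -> float:
    """Laplace mechanism: noise with scale = sensitivity / epsilon gives
    epsilon-differential privacy for a query with the given sensitivity."""
    return value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
# Each worker perturbs a (hypothetical) sensor reading of 20.0 locally;
# the platform only ever sees the noisy values.
noisy = [privatize(20.0, sensitivity=1.0, epsilon=0.5, rng=rng)
         for _ in range(2000)]
estimate = sum(noisy) / len(noisy)
```

Because the noise is zero-mean, averaging over a group of 2000 workers recovers the true mean to within a few hundredths despite each individual value being strongly perturbed; DP-DAWL improves on this plain average with Kalman filtering.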
95.
This paper examines the causal relationship between oil prices and Gross Domestic Product (GDP) in the Kingdom of Saudi Arabia. The study uses a quarterly data set collected by the Saudi Arabian Monetary Authority over the period 1974 to 2016. We investigate how a change in the real crude oil price affects the GDP of KSA. Based on a new technique, we treat the data in its continuous path: we analyze the causality between the two variables, oil prices and GDP, using their yearly curves observed over the four quarters of each year. We discuss causality in the sense of Granger, which requires stationarity of the data. Thus, in the first step, we test stationarity using a Monte Carlo test of functional time series stationarity. Our main goal is addressed in the second step, where we use the functional causality framework to model the co-variability between the variables. We show that the two series are not integrated and that there is one causality between the two variables. All statistical analyses were performed using the R software.
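The scalar (non-functional) analogue of the Granger test used here can be sketched with plain least squares: 'y Granger-causes x' is supported when adding lagged y significantly reduces the residual sum of squares of an autoregression of x. This is the classical bivariate F-test on synthetic data, shown in Python for illustration; it is a simpler stand-in for the paper's functional-curve causality test, which was done in R.

```python
import random

def ols_ssr(X, y):
    """Residual sum of squares of the least-squares fit y ~ X, via normal
    equations solved by Gaussian elimination (fine for 2-3 regressors)."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for i in range(k):                      # elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, k))) / A[i][i]
    return sum((y[r] - sum(X[r][c] * beta[c] for c in range(k))) ** 2 for r in range(n))

def granger_f(x, y):
    """F-statistic for 'y Granger-causes x' with one lag: compare the
    restricted model x_t ~ x_{t-1} against x_t ~ x_{t-1} + y_{t-1}."""
    rows = range(1, len(x))
    restricted = ols_ssr([[1.0, x[t - 1]] for t in rows], [x[t] for t in rows])
    full = ols_ssr([[1.0, x[t - 1], y[t - 1]] for t in rows], [x[t] for t in rows])
    n = len(x) - 1
    return (restricted - full) / (full / (n - 3))

# Synthetic example: x is driven by lagged y, so y should Granger-cause x.
rng = random.Random(1)
y = [rng.gauss(0, 1) for _ in range(500)]
x = [0.0]
for t in range(1, 500):
    x.append(0.5 * x[t - 1] + 0.8 * y[t - 1] + rng.gauss(0, 1))
```

The paper's method replaces the scalar lags with yearly curves of quarterly observations, but the logic is the same: test whether the past of one series improves prediction of the other, after first confirming stationarity.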
96.
As an unsupervised learning method, stochastic competitive learning is commonly used for community detection in social network analysis. Compared with traditional community detection algorithms, it has the advantage of realizing time-series community detection by simulating the community formation process. To improve accuracy and to address the problem that several parameters in stochastic competitive learning must be pre-set, the author improves the algorithm and realizes improved stochastic competitive learning through particle position initialization, parameter optimization and self-adaptive particle domination ability. The experimental results show that each improvement increases the accuracy of the algorithm, and the F1 score of the improved algorithm is 9.07% higher than that of the original algorithm.
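The basic particle-competition idea can be sketched as follows: particles walk the network, each mostly returning to nodes it has already visited, and a node is assigned to the particle that visited it most. This toy sketch (deterministic particle position initialization, a fixed preferential-move probability) is a generic simplification, not the author's improved algorithm with parameter optimization and self-adaptive domination.

```python
import random
from collections import defaultdict

def particle_competition(adj, starts, steps=3000, pref=0.8, seed=0):
    """Toy stochastic competitive learning: each particle moves to its
    most-visited neighbor with probability `pref` (homing), otherwise to a
    random neighbor (exploration). Returns per-node labels and visit counts."""
    rng = random.Random(seed)
    nodes = list(adj)
    visits = {v: [0] * len(starts) for v in nodes}
    pos = list(starts)                       # particle position initialization
    for _ in range(steps):
        for p in range(len(pos)):
            nbrs = adj[pos[p]]
            best = max(nbrs, key=lambda v: visits[v][p])
            if rng.random() < pref and visits[best][p] > 0:
                nxt = best                   # homing toward dominated nodes
            else:
                nxt = rng.choice(nbrs)       # random exploration
            visits[nxt][p] += 1
            pos[p] = nxt
    labels = {v: max(range(len(starts)), key=lambda p: visits[v][p]) for v in nodes}
    return labels, visits

# Two 4-cliques joined by a single bridge edge (3 - 4).
adj = defaultdict(list)
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for a in group:
        for b in group:
            if a != b:
                adj[a].append(b)
adj[3].append(4)
adj[4].append(3)
labels, visits = particle_competition(adj, starts=[0, 7])
```

Each particle settles into one clique, so the domination counts recover the two communities; the time series of `visits` over the run is what lets this family of methods expose the community formation process.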
97.
In this paper, supervised Deep Neural Network (DNN) based signal detection is analyzed for efficiently combating nonlinear distortions and improving error performance in clipping-based Orthogonal Frequency Division Multiplexing (OFDM) systems. One of the main disadvantages of OFDM is its high Peak-to-Average Power Ratio (PAPR). Clipping is a simple method for PAPR reduction; however, clipping introduces nonlinear distortion, which makes estimating the transmitted symbols difficult even with Maximum Likelihood (ML) detection at the receiver. The DNN-based online signal detection uses an offline learning model in which all weights and biases of the fully-connected layers are set, using training data sets, to overcome the nonlinear distortions. This paper therefore introduces the processes required for online signal detection and offline learning, and compares error performance with ML detection in clipping-based OFDM systems. In the simulation results, the DNN-based signal detection achieves better error performance than conventional ML detection in a multi-path fading wireless channel. The performance improvement grows with system complexity, for example in large Multiple Input Multiple Output (MIMO) systems and at high clipping rates.
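The clipping/PAPR trade-off at the heart of this setup can be sketched numerically: generate an OFDM symbol by an inverse DFT of QPSK subcarriers, measure its PAPR, then clip the amplitude at a multiple of the RMS level. This shows only the transmitter-side distortion the DNN is trained to undo, not the detector itself; the subcarrier count and clipping ratio are illustrative.

```python
import cmath
import math
import random

def idft(symbols):
    """Naive inverse DFT (stand-in for the OFDM IFFT)."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * math.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def papr_db(signal):
    """Peak-to-Average Power Ratio in dB."""
    powers = [abs(x) ** 2 for x in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def clip(signal, ratio):
    """Amplitude clipping at ratio * RMS, preserving phase - this is the
    nonlinear distortion that degrades ML detection."""
    rms = math.sqrt(sum(abs(x) ** 2 for x in signal) / len(signal))
    limit = ratio * rms
    return [x if abs(x) <= limit else limit * x / abs(x) for x in signal]

# QPSK symbols on 64 subcarriers.
rng = random.Random(7)
syms = [complex(rng.choice([-1, 1]), rng.choice([-1, 1])) for _ in range(64)]
tx = idft(syms)
clipped = clip(tx, ratio=1.4)
```

Clipping caps the peaks (lower PAPR, cheaper power amplifier) at the cost of spreading nonlinear distortion across all subcarriers, which is why a learned detector can beat symbol-wise ML at the receiver.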
98.
In reliability analysis, the stress-strength model is often used to describe the life of a component that has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y &lt; X] when the stress and strength are independent and both follow the exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is calculated under simple random sampling, ranked set sampling and median ranked set sampling. Four different reliability estimators under median ranked set sampling are derived: two are obtained when both strength and stress have an odd or an even set size, and the other two when the strength has an odd set size and the stress an even one, and vice versa. The performances of the suggested estimators are compared with their competitors under simple random sampling via a simulation study. The simulation study reveals that the stress-strength reliability estimates based on ranked set sampling and median ranked set sampling are more efficient than their competitors based on simple random sampling. In general, the stress-strength reliability estimates based on median ranked set sampling are smaller than the corresponding estimates under ranked set sampling and simple random sampling.
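The quantity being estimated can be checked with a quick Monte Carlo sketch under simple random sampling. For the exponentiated Pareto distribution F(x) = (1 - (1 + x)^(-lam))^alpha, sampling is easy by inverse CDF, and when stress and strength share the same lam the closed form R = alpha_x / (alpha_x + alpha_y) holds (both CDFs are powers of the same baseline). The parameter values below are illustrative; this shows the target R, not the paper's RSS/MRSS estimators.

```python
import random

def rexp_pareto(alpha, lam, rng):
    """Inverse-CDF sample from the exponentiated Pareto distribution
    F(x) = (1 - (1 + x)^(-lam))^alpha, x > 0:
    solve F(x) = u for x, giving x = (1 - u^(1/alpha))^(-1/lam) - 1."""
    u = rng.random()
    return (1.0 - u ** (1.0 / alpha)) ** (-1.0 / lam) - 1.0

def estimate_R(alpha_x, alpha_y, lam, n=200_000, seed=3):
    """Monte Carlo estimate of the stress-strength reliability
    R = P(Y < X) under simple random sampling."""
    rng = random.Random(seed)
    hits = sum(rexp_pareto(alpha_y, lam, rng) < rexp_pareto(alpha_x, lam, rng)
               for _ in range(n))
    return hits / n

# With a common lam, R = alpha_x / (alpha_x + alpha_y) = 2/3 here.
r_hat = estimate_R(alpha_x=2.0, alpha_y=1.0, lam=1.5)
```

The ranked set sampling estimators studied in the paper target this same R but extract more information per measured unit by ranking within small sets before measurement, which is where their efficiency gain over simple random sampling comes from.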
99.
In this article, a new generalization of the inverse Lindley distribution is introduced based on the Marshall-Olkin family of distributions. We call the new distribution the generalized Marshall-Olkin inverse Lindley distribution; it offers more flexibility for modeling lifetime data. The new distribution includes the inverse Lindley and the Marshall-Olkin inverse Lindley as special cases. Essential properties of the generalized Marshall-Olkin inverse Lindley distribution are discussed and investigated, including the quantile function, ordinary moments, incomplete moments, moments of residual life and stochastic ordering. Maximum likelihood estimation is considered under complete samples, Type-I censoring and Type-II censoring. Maximum likelihood estimators as well as approximate confidence intervals for the population parameters are discussed. A comprehensive simulation study is conducted to assess the performance of the estimates in terms of their biases and mean square errors. The notability of the generalized Marshall-Olkin inverse Lindley model is demonstrated by means of two real data sets. The results show that the generalized Marshall-Olkin inverse Lindley model can produce better fits than the power Lindley, extended Lindley, alpha power transmuted Lindley, alpha power extended exponential and Lindley distributions.
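The Marshall-Olkin construction named here is mechanical enough to sketch. Starting from the inverse Lindley CDF F(x) = (1 + theta / ((1 + theta) x)) exp(-theta / x), the Marshall-Olkin map transforms the survival function S = 1 - F into alpha S / (1 - (1 - alpha) S). The code below covers only this Marshall-Olkin special case that the abstract names, not the paper's further generalization.

```python
import math

def inv_lindley_cdf(x, theta):
    """CDF of the inverse Lindley distribution,
    F(x) = (1 + theta / ((1 + theta) x)) * exp(-theta / x), x > 0."""
    if x <= 0:
        return 0.0
    return (1.0 + theta / ((1.0 + theta) * x)) * math.exp(-theta / x)

def mo_inv_lindley_cdf(x, theta, alpha):
    """Marshall-Olkin inverse Lindley CDF, obtained by mapping the
    baseline survival S(x) = 1 - F(x) through the Marshall-Olkin family
        S_MO(x) = alpha * S(x) / (1 - (1 - alpha) * S(x)).
    alpha = 1 recovers the baseline inverse Lindley."""
    s = 1.0 - inv_lindley_cdf(x, theta)
    return 1.0 - alpha * s / (1.0 - (1.0 - alpha) * s)
```

Since the map is monotone in F, the result is a valid CDF for any alpha > 0; the extra shape parameter alpha is what buys the added flexibility for lifetime data.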
100.
Owing to its outstanding ability to process large quantities of high-dimensional data, machine learning has been applied in many areas, such as pattern recognition, classification, spam filtering, data mining and forecasting. As an outstanding machine learning algorithm, K-Nearest Neighbor (KNN) has been widely used in different settings, yet its use in selecting qualified applicants for funding is almost new. The major problem lies in how to accurately determine the importance of attributes. In this paper, we propose a Feature-weighted Gradient Descent K-Nearest Neighbor (FGDKNN) method to classify funding applicants into two classes: approved or not approved. FGDKNN is based on a gradient descent learning algorithm that updates the feature weights by iteratively minimizing the error ratio, so that the importance of attributes can be described more accurately. We evaluate the performance of FGDKNN on the Beijing Innofund dataset. The results show that FGDKNN performs about 23%, 20%, 18% and 15% better than KNN, SVM, DT and ANN, respectively. Moreover, FGDKNN converges quickly under different training scales and performs well under different settings.
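The effect FGDKNN exploits can be demonstrated with a plain feature-weighted KNN: when an uninformative feature dominates the distance, standard KNN misclassifies, and down-weighting that feature fixes the prediction. The toy data and the hand-picked weights below are illustrative; the paper's contribution is learning such weights automatically by gradient descent on the error ratio, which is not reproduced here.

```python
from collections import Counter

def wknn_predict(train, labels, weights, x, k=3):
    """KNN vote using a feature-weighted squared Euclidean distance:
    d(a, b) = sum_j w_j * (a_j - b_j)^2."""
    order = sorted(range(len(train)),
                   key=lambda i: sum(w * (a - b) ** 2
                                     for w, a, b in zip(weights, train[i], x)))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Feature 0 separates the classes; feature 1 is large-scale noise.
train = [(0.0, 10.0), (0.1, 5.0), (0.2, 0.0),    # class 'A'
         (1.0, 10.1), (1.1, 5.1), (1.2, 0.1)]    # class 'B'
labels = ["A", "A", "A", "B", "B", "B"]
query = (0.15, 10.06)                             # truly class 'A' by feature 0

pred_unweighted = wknn_predict(train, labels, (1.0, 1.0), query)   # fooled by noise
pred_weighted = wknn_predict(train, labels, (1.0, 0.01), query)    # noise suppressed
```

With equal weights the noisy second feature pulls two 'B' points into the neighborhood and the vote goes to 'B'; shrinking its weight to 0.01 lets the informative feature decide, and the query is correctly labeled 'A'. FGDKNN's gradient descent automates exactly this reweighting.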
Copyright©北京勤云科技发展有限公司  京ICP备09084417号